User Adoption of Customer Reference Technology

Customer reference technology—a software application to manage your customer reference program—is essential for today’s B2B companies. This post is a companion to our User Adoption of Customer Reference Programs post. We recommend reading that first, because the program itself must be adopted before any technology can be.

No matter how elegant the solution, customer reference management technology is only useful when users a) know about it, b) know how to use it, and c) find what they need when they use it. The overriding requirement is that the technology is easier, faster and yields better outcomes than the alternatives.

The right technology can scale your customer reference program without proportionately growing the team needed to support it. It should replace both spreadsheet data management and the indiscriminate, inefficient reference-request email blasts or Slack, Chatter, or Teams posts to which most companies are resigned.

At a minimum, customer reference technology should offer end users:

  • One place to search for customer advocates & customer content
  • Multiple ways (and sufficient granularity) to filter results based on opportunity criteria
  • Automation of reference requests
  • Automation of customer content sharing, and user engagement (click) tracking

Save time, reduce hassle, and make it easier and faster to get customer perspectives in front of buyers. With such basic reference needs addressed, adoption should be a no-brainer, right?

The reality about gaining user adoption for any software solution is that change management is always involved. Even though a salesperson can search a set of pre-qualified customer advocates and find a highly relevant customer reference match, the old habit of messaging the whole sales team (spray and pray) is hard to break.

Method 1: Send a message to ALL SALES or ALL CSMs. Little effort on the front end, but unreliable results and, ultimately, more work (and stress) in the long run.

Method 2: Search a reference-specific database. A little search time on the front end, minimal effort on the back end, and high-probability, relevant results.

This video explains and compares the two methods in <2 minutes.

Defining Reference Technology Adoption

First, how do we measure program success related to the system? Is it:

  • The percentage of users who use the software on a monthly basis?
  • The number of requests submitted each month?
  • The amount of ARR or MRR influenced monthly or quarterly?
  • The percentage of requests that are submitted and fulfilled?
  • The percentage of searches that produce relevant/useful advocate or content matches?
  • The number of advocate nominations (i.e., recruiting)?
  • User feedback related to time savings?
  • The impact the technology has on compressing (accelerating) the reference part of the sales cycle?
  • The impact on win-rates due to the quality of references presented to buyers, compared to the competition?

All of these indicators can have a place in program measurement. But for some, context adds important nuance. Take, for example, the percentage of users who use the software on a monthly basis. To be useful, this percentage must be calibrated against the following:

  • The average number of deals per month that a rep closes
    Lower deal sizes generally translate to higher volume (hundreds per month); higher deal sizes, to lower volume (tens per month).
  • The percentage of deals that require references to close
    Not every deal needs references. If your company is the industry standard, that percentage will be lower. If you’re new, or selling new products, those deals will need more social proof to close.
  • The tenure of the salesperson
    More established salespeople will have accrued a reference pool of their own and won’t feel compelled to use a reference tool until they need a reference unlike any in their “back pocket.” Newer salespeople, who need all the assistance available, will rely heavily on a reference tool because they don’t have other options.

Here are two scenarios that yield different expectations regarding this metric.

Company A

Reps close, on average, 1 deal per month averaging $340,000/deal. There are 100 reps. Approximately 75% of buyers request references. The rest have, perhaps, had previous experience with the solution, or found sufficient social proof online, in their network, from analysts, etc.

The volume of opportunities requiring references is 75 per month. Let’s presume the reference database has the potential—both the quantity and quality needed—to support all 75 requests. Next, how does the tenure of the reps break out? Assume 50 of the reps have a tenure of 5 or more years and can source references from their own “portfolio,” and 50 have 2 years or less. The result is that approximately 38 users will make 1 reference request (perhaps for more than one account) per month, or 38 requests in total each month.

Company B

Each rep closes, on average, 5 deals per month, averaging $28,000/deal. There are 400 reps. Approximately 62% of buyers request references; the rest may have taken advantage of a 20-day free trial, or found sufficient social proof from online product reviews or company-generated content (e.g., ROI studies). Let’s presume the reference database has the potential—both the quantity and quality needed—to support all 1,240 monthly requests. Looking at tenure, let’s say 300 of the reps have been at the company for 5+ years, and 100 for 2 years or less. The result in this case is that 100 users will each make roughly 3 reference requests per month (5 deals × 62%), or about 310 requests in total each month.

You can see how different the calculations are in these two different companies and how setting expectations and program goals require this level of understanding.
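The expectation-setting arithmetic in these scenarios can be sketched in a few lines. This is a minimal sketch under the same assumptions as above: only shorter-tenured reps lean on the tool, and each requests references for the share of their deals that need them (the function name is our own, not from any specific product):

```python
def expected_monthly_requests(newer_reps, deals_per_rep, pct_needing_refs):
    """Rough monthly request volume, assuming only newer reps use the tool
    and each requests references for the share of their deals that need them."""
    return newer_reps * deals_per_rep * pct_needing_refs

# Company A: 50 newer reps, 1 deal/rep/month, 75% of buyers want references
company_a = expected_monthly_requests(50, 1, 0.75)    # 37.5, i.e. ~38 requests

# Company B: 100 newer reps, 5 deals/rep/month, 62% of buyers want references
company_b = expected_monthly_requests(100, 5, 0.62)   # 310 requests
```

Swap in your own rep counts, deal velocity, and reference rate before setting an adoption target; the point is that the same usage percentage means very different things at different companies.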

At the other end of the spectrum of value assessment, if the program and system of choice influences $100M per year and costs less than $1M per year, most bean counters would call that a high return investment, regardless of usage volumetrics.

But let’s look at those activity-based metrics. In our application there are 10 end-user actions that indicate adoption (request submissions, nominations, content sharing invitations sent, etc.). Some are weighted higher than others based on effort and value. Some occur more frequently than others. Start by creating that list and assigning relative value, in the form of points, to the actions. Next, set some expectations that define low, moderate and high adoption. But don’t do this in a vacuum. Use these strategies to reach and exceed your adoption goals.
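To make the weighting concrete, here is a minimal sketch of a points-based adoption score. The action names, point values, and tier thresholds are hypothetical placeholders, not the actual weights in our application; your advisory board should help calibrate the real ones:

```python
# Hypothetical point values per end-user action, weighted by effort and value.
ACTION_POINTS = {
    "reference_request_submitted": 10,
    "advocate_nominated": 15,
    "content_share_sent": 5,
    "reference_info_updated": 8,
    "search_performed": 1,
}

# Hypothetical per-user monthly thresholds for low/moderate/high adoption.
TIERS = [(100, "high"), (40, "moderate"), (0, "low")]

def adoption_score(actions):
    """Sum points for a user's monthly actions, e.g. {"search_performed": 12}."""
    return sum(ACTION_POINTS.get(name, 0) * count for name, count in actions.items())

def adoption_tier(score):
    for threshold, label in TIERS:
        if score >= threshold:
            return label

user_month = {"search_performed": 12, "reference_request_submitted": 3, "content_share_sent": 2}
score = adoption_score(user_month)   # 12*1 + 3*10 + 2*5 = 52
print(score, adoption_tier(score))   # 52 moderate
```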

7 Proven System Adoption Strategies for Customer Reference Software

Program Advisory Board

Assemble a group of 10-12 stakeholders from sales, customer success, demand gen, PR, etc. This group will be your litmus test for program ideas in general and technology decisions specifically. The board provides real-world input that informs decisions on system configuration (such as search filters) and data needs, and helps set expectations around user adoption metrics. For instance, you might think a typical salesperson needs 2-3 references at least 2 times per month. The board may tell you it’s more like 4 times per month, based on sales cycle length and pipeline volume, and that no more than 2 customers are ever requested. You may also learn that only about 70% of opportunities request to speak with customers, when you thought the number was 100%. Your expectations need to adjust.

With so many competing sales tools and responsibilities, it’s important to get things right the first time. The board will let you gauge how configuration changes, new features, or process changes will be received by the field. Board members are also ambassadors for the program and the technology, and should be identified as such to their peers. They informally aggregate feedback and provide assistance when no one from the customer reference program is around.

Source Consolidation

This may seem obvious. You’ve identified the need to centralize customer reference data and perhaps customer content. But letting go of old spreadsheets, wikis, and cloud collaboration folders (e.g., Dropbox) full of unorganized files can be hard to enforce. As long as those other sources exist, your customer reference management software will not achieve its rightful place as the master source. Often this game of whack-a-mole needs executive intervention. More on that later.


Promotion

Any marketer would consider promotion a standard component of any campaign. But it’s not always given proper emphasis in this context. Users have to know about a system in order to use it. The reference program itself may be a foreign concept to at least some users. Introducing a program and its supporting technology requires a promotion plan and calendar. Promotion starts with communications to all users coming from top leadership. Then the program needs to develop a cadence of sharing program wins (“$1M deal closed with ABC Corporation with the support of 3 high-impact reference calls”). Pepper those communications with tips for leveraging reference customers, and the technology, for maximum effect.

Data Quality & Ownership

We view quality as a combination of accuracy, completeness, and currency. Poor data quality is a showstopper. You can have an intuitive tool with a pretty UI, but if the data is unreliable, users will fall back on whatever they can to get what they need. And who is best positioned to ensure data quality? The resources closest to the account relationship: sales or customer success/account management. Many companies have not yet concluded that customer reference intelligence is a shared responsibility between the relationship owners and marketing, but failing to treat it that way is a mistake. Our application includes elegant automation to ensure reference information is periodically reviewed and updated, but relationship owners have to be invested—this is a team sport.

What about customer content? It needs to be properly tagged so that it can be found even when keywords don’t appear in the body of a document. Tagging content by sales stage also guides users to the most appropriate assets. This generally falls to the content creation team.

Make it Fun!

It’s well documented that salespeople like competition and recognition. The long-running debate about whether or not to incentivize salespeople for “doing their jobs” is old thinking. If you want rapid adoption, reward the desired behaviors and make sure the game leaderboard is prominent. Reward points can be issued for helping with a request, updating customer reference information, nominating a customer, etc. The more referenceable a salesperson’s accounts are, the more reward points she should receive. Your advisory board can help you determine what will “move the needle” in your culture.

Training & Education

If only all business software were as easy to use as Google. Software can do a lot to simplify and automate complex functions, but there is still a need for training. While recorded training is available in corporate learning systems, our experience is that unless it’s mandatory, it isn’t watched; retention, even when it is watched, is debatable. We are big proponents of offering recurring (usually monthly), well-promoted, just-in-time live training sessions. This addresses the ebb and flow (i.e., churn) of the sales team, and the human tendency to seek training when there’s a need, not proactively when it’s a low priority. Awareness of the customer reference program and the customer reference management application should be part of new-hire onboarding for applicable roles. New hires need both more than the veteran salespeople at your company do.

The big picture is also important. Sales leadership should make it crystal clear why using the customer reference software is not optional. If salespeople aren’t savvy enough to use customer advocates in nearly every deal, then the organization needs to explain how customers support quota attainment and company growth goals. Without technology, the process is inefficient, error-prone, and unmeasurable.

Functional Integration

Customer reference software should not be on an island by itself. CRM is an obvious intersection point. But what other systems feed into, or could benefit from, customer reference system data? Any sales enablement tool that includes playbooks should have easy access to customer reference search: content for early stages, reference contacts at later stages. Marketing automation tools manage campaigns that offer customer content or access to customer contacts (e.g., webinars, events). The connection between campaigns and customer reference assets is part of the revenue-influenced metric. Customer success apps that monitor customer health can both leverage customer advocate activity data and drive customer advocate status (e.g., unhappy account = inactive reference account). The tighter the integration into the larger ecosystem, the better the user adoption of every tool in it.
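The health-to-status rule above can be sketched in a few lines. This is a hypothetical illustration: the 0-100 health scale, thresholds, and status labels are our own assumptions, not the behavior of any particular customer success or reference platform:

```python
def reference_status(health_score, is_advocate):
    """Map a customer-health score (hypothetical 0-100 scale) to a reference
    availability status, so unhappy accounts are never offered as references
    regardless of past advocacy."""
    if not is_advocate:
        return "not_enrolled"
    if health_score < 40:
        return "inactive"   # unhappy account: pause all reference activity
    if health_score < 70:
        return "limited"    # usable, but check with the account owner first
    return "active"

print(reference_status(85, True))   # active
print(reference_status(30, True))   # inactive
```

In practice the health score would arrive via an integration from the customer success platform, and the resulting status would filter which advocates appear in reference search results.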

This post is related to the broader goal of User Adoption of Customer Reference Programs. Having leadership support and the right program manager also determines adoption of the technology. If you’re building a case for your program, check out our business case checklist. We also have an infographic featuring report findings and stats from analysts that will reinforce your case with expert perspectives. Visit our Resources page for other useful information and tools.